A test statistic is a quantity derived from the sample for statistical hypothesis testing (Berger, R. L.; Casella, G. (2001). Statistical Inference, 2nd ed., Duxbury Press, p. 374). A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data set that reduces the data to one value that can be used to perform the hypothesis test. In general, a test statistic is selected or defined in such a way as to quantify, within observed data, behaviours that would distinguish the null hypothesis from the alternative hypothesis, where such an alternative is prescribed, or that would characterize the null hypothesis if there is no explicitly stated alternative hypothesis.
An important property of a test statistic is that its sampling distribution under the null hypothesis must be calculable, either exactly or approximately, which allows p-values to be calculated. A test statistic shares some of the same qualities as a descriptive statistic, and many statistics can be used as both test statistics and descriptive statistics. However, a test statistic is specifically intended for use in statistical testing, whereas the main quality of a descriptive statistic is that it is easily interpretable. Some informative descriptive statistics, such as the sample range, do not make good test statistics, since it is difficult to determine their sampling distribution.
Two widely used test statistics are the t-statistic and the F-statistic.
For example, suppose the task is to test whether a coin is fair (i.e., has equal probabilities of producing a head or a tail). If the coin is flipped 100 times and the results are recorded, a natural test statistic is the total number of heads. Under the null hypothesis this count follows a binomial distribution, which is approximately normal for large samples. Using one of these sampling distributions, it is possible to compute either a one-tailed or two-tailed p-value for the null hypothesis that the coin is fair. The test statistic in this case reduces a set of 100 numbers to a single numerical summary that can be used for testing.
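As a minimal sketch in Python using SciPy (the observed count of 61 heads is hypothetical), both the exact binomial p-value and its normal approximation can be computed as follows:

```python
from scipy.stats import binomtest, norm

heads = 61        # hypothetical test statistic: number of heads in 100 flips
n, p0 = 100, 0.5

# Exact two-tailed p-value from the binomial sampling distribution.
p_exact = binomtest(heads, n=n, p=p0, alternative='two-sided').pvalue

# Normal approximation: z = (x - n*p0) / sqrt(n*p0*(1 - p0)).
z = (heads - n * p0) / (n * p0 * (1 - p0)) ** 0.5
p_approx = 2 * norm.sf(abs(z))

print(p_exact, p_approx)   # roughly 0.035 (exact) and 0.028 (approximate)
```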
Two-sample tests are appropriate for comparing two samples, typically experimental and control samples from a scientifically controlled experiment.
Paired tests are appropriate for comparing two samples where it is impossible to control important variables. Rather than comparing the two sets directly, members are paired between the samples so that the differences between paired members become the sample. Typically the mean of the differences is then compared to zero. A common scenario in which a paired difference test is appropriate is when a single set of test subjects has something applied to them and the test is intended to check for an effect.
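As a sketch of this (the before/after measurements are hypothetical), a paired t-test reduces each pair to a difference and tests whether the mean difference is zero:

```python
from scipy.stats import ttest_rel

# Hypothetical measurements on the same six subjects before and after treatment.
before = [72, 88, 65, 90, 77, 81]
after  = [75, 89, 70, 94, 79, 84]

# The pairwise differences become the sample; H0: mean difference is zero.
stat, pvalue = ttest_rel(after, before)
print(stat, pvalue)
```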
Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation.
A t-test is appropriate for comparing means under relaxed conditions (less is assumed).
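As a minimal sketch of the one-sample case (the data values and the hypothesized mean of 5.0 are hypothetical), the unknown σ is replaced by the sample standard deviation, and the statistic is referred to a t distribution with n − 1 degrees of freedom:

```python
from scipy.stats import ttest_1samp

sample = [5.1, 4.9, 5.6, 5.2, 4.7, 5.4, 5.0, 5.3]   # hypothetical data
stat, pvalue = ttest_1samp(sample, popmean=5.0)      # H0: mu = 5.0
print(stat, pvalue)
```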
Tests of proportions are analogous to tests of means (the 50% proportion).
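For example (a sketch with hypothetical counts), the one-proportion z-statistic listed in the table below standardizes the sample proportion under H0: p = p0:

```python
from math import sqrt
from scipy.stats import norm

x, n, p0 = 58, 100, 0.5                     # hypothetical: 58 successes in 100 trials
p_hat = x / n
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)  # z = (p-hat - p0) / sqrt(p0(1 - p0)/n)
pvalue = 2 * norm.sf(abs(z))                # two-tailed
print(z, pvalue)                            # z = 1.6, p about 0.11
```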
Chi-squared tests use the same calculations and the same probability distribution for different applications, such as testing whether a normal population has a specified variance and testing the goodness of fit of expected frequencies to observed data (see the table below).
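As an illustration of the goodness-of-fit case (the die-roll counts are hypothetical), the statistic sums (observed − expected)²/expected over the categories:

```python
from scipy.stats import chisquare

# Hypothetical face counts from 120 rolls of a die; the fair-die null implies
# an expected count of 20 per face (chisquare defaults to equal expected counts).
observed = [18, 22, 16, 14, 25, 25]
stat, pvalue = chisquare(observed)
print(stat, pvalue)
```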
F-tests (analysis of variance, ANOVA) are commonly used when deciding whether groupings of data by category are meaningful. If the variance of test scores of the left-handed students in a class is much smaller than the variance of the whole class, then it may be useful to study left-handed students as a group. The null hypothesis is that the two variances are the same, so the proposed grouping is not meaningful.
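A sketch of this variance comparison (the scores are hypothetical): the ratio of the two sample variances is referred to an F distribution, as in the F-test row of the table below:

```python
import numpy as np
from scipy.stats import f

lefties = np.array([71, 73, 70, 74, 72])          # hypothetical test scores
whole   = np.array([55, 90, 62, 88, 70, 95, 60])

s2_whole = np.var(whole, ddof=1)   # sample variances
s2_left  = np.var(lefties, ddof=1)

# Larger variance in the numerator; H0: the two variances are equal.
F = s2_whole / s2_left
pvalue = 2 * f.sf(F, len(whole) - 1, len(lefties) - 1)   # two-tailed
print(F, pvalue)
```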
In the table below, the symbols used are defined at the bottom of the table. Many other tests can be found in other articles. Proofs exist that these test statistics are appropriate; the proofs do not reference the concepts introduced by Neyman and Pearson, but instead show that the traditional test statistics have the probability distributions ascribed to them, so that significance calculations assuming those distributions are correct.
Name | Formula | Assumptions or notes
One-sample z-test | $z = \frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$ | (Normal population or n large) and σ known. (z is the distance from the mean in relation to the standard error.) For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within k standard deviations for any k (see: Chebyshev's inequality).
Two-sample z-test | $z = \frac{(\bar{x}_1 - \bar{x}_2) - d_0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}$ | Normal populations and independent observations, with σ1 and σ2 known, where d0 is the value of μ1 − μ2 under the null hypothesis.
One-sample t-test | $t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}$, df = n − 1 | (Normal population or n large) and σ unknown.
Paired t-test | $t = \frac{\bar{d} - d_0}{s_d/\sqrt{n}}$, df = n − 1 | (Normal population of differences or n large) and σ unknown.
Two-sample pooled t-test, equal variances | $t = \frac{(\bar{x}_1 - \bar{x}_2) - d_0}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$ with $s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}$, df = n1 + n2 − 2 (NIST handbook: Two-Sample t-test for Equal Means) | (Normal populations or n1 + n2 > 40) and independent observations and σ1 = σ2, both unknown.
Two-sample unpooled t-test, unequal variances (Welch's t-test) | $t = \frac{(\bar{x}_1 - \bar{x}_2) - d_0}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$, df = $\frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{(s_1^2/n_1)^2}{n_1 - 1} + \frac{(s_2^2/n_2)^2}{n_2 - 1}}$ (Welch–Satterthwaite equation) | (Normal populations or n1 + n2 > 40) and independent observations and σ1 ≠ σ2, both unknown.
One-proportion z-test | $z = \frac{\hat{p} - p_0}{\sqrt{p_0(1 - p_0)/n}}$ | n p0 > 10 and n(1 − p0) > 10 and it is an SRS (simple random sample); see notes.
Two-proportion z-test, pooled for $H_0\colon p_1 = p_2$ | $z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1 - \hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$ with $\hat{p} = \frac{x_1 + x_2}{n_1 + n_2}$ | n1 p1 > 5 and n1(1 − p1) > 5 and n2 p2 > 5 and n2(1 − p2) > 5 and independent observations; see notes.
Two-proportion z-test, unpooled for $|d_0| > 0$ | $z = \frac{(\hat{p}_1 - \hat{p}_2) - d_0}{\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}}$ | n1 p1 > 5 and n1(1 − p1) > 5 and n2 p2 > 5 and n2(1 − p2) > 5 and independent observations; see notes.
Chi-squared test for variance | $\chi^2 = (n - 1)\frac{s^2}{\sigma_0^2}$, df = n − 1 | Normal population.
Chi-squared test for goodness of fit | $\chi^2 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}$, df = k − 1 − (number of parameters estimated) | One of these must hold: • all expected counts are at least 5 (Steel, R. G. D., and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences, McGraw Hill, 1960, p. 350), or • all expected counts are > 1 and no more than 20% of expected counts are less than 5.
Two-sample F test for equality of variances | $F = \frac{s_1^2}{s_2^2}$ | Normal populations. Arrange so $s_1^2 \ge s_2^2$ and reject H0 for $F > F(\alpha/2, n_1 - 1, n_2 - 1)$ (NIST handbook: F-Test for Equality of Two Standard Deviations; testing standard deviations is the same as testing variances).
Regression t-test of $H_0\colon R^2 = 0$ | $t = \sqrt{\frac{R^2(n - k - 1^*)}{1 - R^2}}$ | Reject H0 for $t > t(\alpha/2, n - k - 1^*)$ (Steel, R. G. D., and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences, McGraw Hill, 1960, p. 288). *Subtract 1 for intercept; k terms contain independent variables.
In general, the subscript 0 indicates a value taken from the null hypothesis, H0, which should be used as much as possible in constructing its test statistic. Definitions of the other symbols:
• α = probability of a Type I error (rejecting a null hypothesis when it is in fact true)
• n = sample size; n1, n2 = sizes of samples 1 and 2
• $\bar{x}$ = sample mean; μ0 = hypothesized population mean; μ1, μ2 = means of populations 1 and 2
• σ = population standard deviation; σ² = population variance
• s = sample standard deviation; s² = sample variance; s1, s2 = sample standard deviations of samples 1 and 2
• t = t statistic; df = degrees of freedom
• $\bar{d}$ = sample mean of differences; d0 = hypothesized population mean difference; sd = standard deviation of differences
• $\hat{p}$ = x/n = sample proportion (unless specified otherwise); p0 = hypothesized population proportion; p1, p2 = proportions of populations 1 and 2; x1 = n1 p1, x2 = n2 p2
• χ² = chi-squared statistic; F = F statistic
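As a worked check on the Welch row above (the two samples are hypothetical), the unpooled statistic and Welch–Satterthwaite degrees of freedom can be computed directly and compared against SciPy's built-in implementation:

```python
import numpy as np
from scipy.stats import t, ttest_ind

a = np.array([3.1, 2.9, 3.4, 3.0, 3.3, 2.8])   # hypothetical samples
b = np.array([3.6, 3.8, 3.5, 3.9, 3.7])

n1, n2 = len(a), len(b)
v1, v2 = np.var(a, ddof=1), np.var(b, ddof=1)

# Unpooled t statistic (d0 = 0) and Welch-Satterthwaite degrees of freedom.
se2 = v1 / n1 + v2 / n2
t_stat = (a.mean() - b.mean()) / se2 ** 0.5
df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
pvalue = 2 * t.sf(abs(t_stat), df)

print(t_stat, df, pvalue)
print(ttest_ind(a, b, equal_var=False))   # should agree with the manual values
```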